# High Recall Rate
## Marqo Fashionsiglip ST

Marqo-FashionSigLIP is a multimodal embedding model optimized for fashion product search, achieving a 57% improvement in MRR and recall over FashionCLIP.

- License: Apache-2.0
- Task: Image-to-Text
- Framework: Transformers (English)
- Author: pySilver · Downloads: 3,586 · Likes: 0
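The MRR (mean reciprocal rank) figure cited above can be made concrete. A minimal sketch of how MRR is computed over a set of queries; the per-query ranks here are invented for illustration:

```python
def mean_reciprocal_rank(results):
    """Compute MRR given, for each query, the 1-based rank of the first
    relevant item retrieved, or None if nothing relevant was retrieved."""
    total = 0.0
    for first_relevant_rank in results:
        if first_relevant_rank is not None:
            total += 1.0 / first_relevant_rank
    return total / len(results)

# Three queries: first relevant hit at rank 1, rank 3, and not found.
print(mean_reciprocal_rank([1, 3, None]))  # (1 + 1/3 + 0) / 3 ≈ 0.444
```

A 57% relative MRR improvement therefore means relevant products appear substantially closer to the top of the ranked results on average.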
## Distilhubert Finetuned Babycry V6

A baby-cry classification model fine-tuned from DistilHuBERT, performing well on audio classification tasks.

- License: Apache-2.0
- Task: Audio Classification
- Framework: Transformers
- Author: Wiam · Downloads: 14 · Likes: 0
## Roberta Base Legal Multi Downstream Indian Ner

A RoBERTa model pre-trained on multilingual legal texts and fine-tuned for Indian legal named-entity recognition.

- Task: Sequence Labeling
- Framework: Transformers
- Author: MHGanainy · Downloads: 66 · Likes: 2
## Monobert Legal French

A French text classification model based on the CamemBERT architecture, designed for paragraph reordering in the legal domain.

- License: MIT
- Task: Text Classification (French)
- Author: maastrichtlawtech · Downloads: 802 · Likes: 1
## Crossencoder Xlm Roberta Base Mmarcofr

A French cross-encoder based on XLM-RoBERTa, designed for passage re-ranking in semantic search.

- License: MIT
- Task: Text Embedding (French)
- Author: antoinelouis · Downloads: 51 · Likes: 0
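As context for how a cross-encoder is used: unlike a bi-encoder, it scores each (query, passage) pair jointly, and the scores only reorder an existing candidate list. A minimal sketch of that re-ranking step; the word-overlap scorer below is a stand-in for illustration, whereas a real deployment would run the model on each pair:

```python
def rerank(query, passages, score_fn):
    """Score each (query, passage) pair jointly and sort passages by
    descending relevance score, as a cross-encoder re-ranker does."""
    scored = [(score_fn(query, p), p) for p in passages]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for _, p in scored]

# Stand-in scorer: word overlap between query and passage (illustration only).
def overlap_score(query, passage):
    return len(set(query.lower().split()) & set(passage.lower().split()))

candidates = ["le chat dort", "tarif du train Paris Lyon", "horaires train Paris"]
print(rerank("train Paris", candidates, overlap_score))
```

Because every pair is encoded together, cross-encoders are more accurate but too slow for first-stage retrieval, which is why they are applied only to a short candidate list.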
## Dragon Multiturn Query Encoder

Dragon-multiturn is a retriever designed for conversational question answering; it handles queries that combine the conversation history with the current question.

- License: Other
- Task: Question Answering System
- Framework: Transformers (English)
- Author: nvidia · Downloads: 710 · Likes: 59
## Colbert Xm

ColBERT-XM is a multilingual passage retrieval model based on the ColBERT architecture, supporting sentence-similarity and passage-retrieval tasks in multiple languages.

- License: MIT
- Task: Text Embedding
- Format: Safetensors · Supports multiple languages
- Author: antoinelouis · Downloads: 29.07k · Likes: 61
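ColBERT-style models score a query against a passage by late interaction: each query token embedding is matched against its most similar passage token embedding, and those maxima are summed. A minimal NumPy sketch of that MaxSim scoring, using random toy embeddings rather than real model output:

```python
import numpy as np

def maxsim_score(query_emb, passage_emb):
    """ColBERT-style late interaction: for each query token, take the max
    cosine similarity over passage tokens, then sum over query tokens."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    p = passage_emb / np.linalg.norm(passage_emb, axis=1, keepdims=True)
    sim = q @ p.T                 # (num_query_tokens, num_passage_tokens)
    return sim.max(axis=1).sum()  # MaxSim per query token, then sum

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 128))   # 4 query tokens, 128-dim toy embeddings
p = rng.normal(size=(20, 128))  # 20 passage tokens
print(round(float(maxsim_score(q, p)), 3))
```

Keeping one embedding per token (instead of one per text) is what makes this fine-grained matching possible, at the cost of a larger index.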
## Beto Sentiment Analysis Spanish

A sentiment analysis model based on BETO (the Spanish version of BERT), supporting sentiment classification of Spanish text.

- Task: Text Classification
- Framework: Transformers (Spanish)
- Author: ignacio-ave · Downloads: 1,708 · Likes: 6
## Msmarco Bert Base Dot V5 Fine Tuned AI

A BERT-based semantic search model optimized for information retrieval systems; it maps text into a 768-dimensional vector space.

- Task: Text Embedding
- Framework: Transformers (English)
- Author: Adel-Elwan · Downloads: 18 · Likes: 0
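A model like this is typically used for dense retrieval: documents are embedded once, then ranked by dot product against the query vector (hence "dot" in the name). A minimal NumPy sketch of that scoring step, with toy 768-dimensional vectors standing in for real embeddings:

```python
import numpy as np

def dot_product_search(query_vec, doc_matrix, top_k=3):
    """Rank documents by dot product with the query embedding and
    return the indices of the top_k highest-scoring documents."""
    scores = doc_matrix @ query_vec          # (num_docs,)
    return np.argsort(scores)[::-1][:top_k]

rng = np.random.default_rng(42)
docs = rng.normal(size=(100, 768))             # 100 toy document embeddings
query = docs[7] + 0.01 * rng.normal(size=768)  # query near document 7
print(dot_product_search(query, docs, top_k=3))  # document 7 ranks first
```

In production the brute-force matrix product is usually replaced by an approximate nearest-neighbor index, but the scoring function is the same.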
## Cotmae Base Msmarco Reranker

An MS MARCO passage re-ranking model trained on the CoT-MAE architecture to enhance dense passage retrieval performance.

- Task: Text Embedding
- Framework: Transformers
- Author: caskcsg · Downloads: 16 · Likes: 1
## Roberta Base Tweetner7 All

A named-entity recognition model fine-tuned from roberta-base on the tweetner7 dataset, designed for entity recognition in Twitter text.

- Task: Sequence Labeling
- Framework: Transformers
- Author: tner · Downloads: 30 · Likes: 0
## Roberta Large Tweetner7 All

A named-entity recognition model fine-tuned from roberta-large on the tner/tweetner7 dataset, designed for entity recognition in Twitter text.

- Task: Sequence Labeling
- Framework: Transformers
- Author: tner · Downloads: 170.06k · Likes: 1
## Splade Cocondenser Ensembledistil

A SPLADE model for passage retrieval that improves sparse neural information retrieval through knowledge distillation.

- Task: Text Embedding
- Framework: Transformers (English)
- Author: naver · Downloads: 606.73k · Likes: 42
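SPLADE represents a text as a sparse vector over the vocabulary: token-level MLM logits are passed through log(1 + ReLU(·)) and max-pooled over the sequence, so most dimensions come out exactly zero. A minimal NumPy sketch of that pooling step on toy logits; real models produce logits over a vocabulary of roughly 30k terms:

```python
import numpy as np

def splade_pooling(token_logits):
    """SPLADE sparse representation: log(1 + ReLU(logits)) per token,
    then max-pool over the sequence -> one weight per vocabulary term."""
    activated = np.log1p(np.maximum(token_logits, 0.0))
    return activated.max(axis=0)  # (vocab_size,)

# Toy logits: 3 tokens, vocabulary of 8 terms; negatives become exact zeros.
logits = np.array([[ 2.0, -1.0,  0.5, -3.0, 0.0, -0.2, 4.0, -5.0],
                   [-0.5,  1.0, -2.0, -1.0, 0.0, -0.9, 0.3, -4.0],
                   [ 0.1, -0.3, -0.7, -2.0, 0.0, -1.0, 2.0, -0.5]])
vec = splade_pooling(logits)
print(np.count_nonzero(vec))  # → 4: only four vocabulary terms are active
```

The exact zeros are what make these vectors compatible with classical inverted indexes, unlike dense embeddings.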
## Splade Cocondenser Selfdistil

A SPLADE model for passage retrieval that improves retrieval effectiveness through sparse latent document expansion and knowledge distillation.

- Task: Text Embedding
- Framework: Transformers (English)
- Author: naver · Downloads: 16.11k · Likes: 10
## Gpt2 Finetuned Comp2

A fine-tuned model based on the GPT-2 architecture, optimized for a specific downstream task.

- License: MIT
- Task: Large Language Model
- Framework: Transformers
- Author: brad1141 · Downloads: 75 · Likes: 0
## Marker Associations Binary Base

A binary classification model fine-tuned from PubMedBERT on biomedical text, designed for marker-association classification.

- License: MIT
- Task: Text Classification
- Framework: Transformers
- Author: jambo · Downloads: 16 · Likes: 0